Priors on the Variance in Sparse Bayesian Learning; the demi-Bayesian Lasso

Authors

  • Suhrid Balakrishnan
  • David Madigan
Abstract

We explore the use of proper priors for the variance parameters of certain sparse Bayesian regression models. This leads to a connection between sparse Bayesian learning (SBL) models (Tipping, 2001) and the recently proposed Bayesian Lasso (Park and Casella, 2008). We outline simple modifications of existing algorithms to solve this new variant, which essentially uses type-II maximum likelihood to fit the Bayesian Lasso model. We also propose an Elastic-net (Zou and Hastie, 2005) heuristic to help with modeling correlated inputs. Experimental results show that the proposals compare favorably to the Lasso and to both traditional and more recent sparse Bayesian algorithms.
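As a rough illustration of the type-II maximum likelihood machinery the abstract refers to, here is a minimal numpy sketch of evidence maximization in the standard Gaussian SBL model (Tipping, 2001), using MacKay-style fixed-point updates for the precision hyperparameters. This is not the paper's demi-Bayesian Lasso algorithm, which additionally places a proper prior on the variance parameters; the function name, noise level, and pruning cap below are illustrative assumptions.

```python
import numpy as np

def sbl_type2_ml(X, y, sigma2=0.01, n_iter=100, alpha_cap=1e6):
    """Sparse Bayesian learning via type-II maximum likelihood.

    Model: y = X w + eps, eps ~ N(0, sigma2 I), w_i ~ N(0, 1/alpha_i).
    The precisions alpha_i are fit by maximizing the marginal likelihood
    (evidence); a large alpha_i effectively prunes feature i.
    """
    n, d = X.shape
    alpha = np.ones(d)
    for _ in range(n_iter):
        # Posterior over weights given the current hyperparameters.
        Sigma = np.linalg.inv(X.T @ X / sigma2 + np.diag(alpha))
        mu = Sigma @ X.T @ y / sigma2
        # MacKay fixed-point update: gamma_i measures how well-determined
        # weight i is by the data (effective degrees of freedom).
        gamma = 1.0 - alpha * np.diag(Sigma)
        alpha = np.minimum(gamma / (mu ** 2 + 1e-12), alpha_cap)
    return mu, alpha
```

On synthetic data with a few truly nonzero coefficients, the irrelevant weights are driven toward zero as their precisions grow, which is the pruning behavior the sparse Bayesian literature exploits.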


Similar resources

Handling Sparsity via the Horseshoe

This paper presents a general, fully Bayesian framework for sparse supervised-learning problems based on the horseshoe prior. The horseshoe prior is a member of the family of multivariate scale mixtures of normals, and is therefore closely related to widely used approaches for sparse Bayesian learning, including, among others, Laplacian priors (e.g. the LASSO) and Student-t priors (e.g. the rel...
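The scale-mixture-of-normals representation mentioned in this summary can be sketched directly: under the horseshoe prior, each coefficient is conditionally Gaussian with a half-Cauchy local scale. A minimal numpy sketch of drawing from this prior (the function name and default global scale are illustrative assumptions):

```python
import numpy as np

def horseshoe_prior_samples(n, tau=1.0, seed=None):
    """Draw n samples from the horseshoe prior via its
    scale-mixture-of-normals representation:
        w_i | lambda_i ~ N(0, tau^2 * lambda_i^2),
        lambda_i       ~ Half-Cauchy(0, 1).
    """
    rng = np.random.default_rng(seed)
    lam = np.abs(rng.standard_cauchy(n))       # local half-Cauchy scales
    return rng.standard_normal(n) * tau * lam  # global-local shrinkage draw
```

The draws exhibit the prior's signature shape: most mass concentrated near zero (aggressive shrinkage of noise) together with very heavy tails (large signals left nearly unshrunk).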


Bayesian Sample Size Determination for Longitudinal Studies with Continuous Response Using Marginal Models

Introduction: Longitudinal study designs are common in many scientific fields, especially in the medical, social, and economic sciences, because they allow researchers to measure changes in each individual over time and often have higher statistical power than cross-sectional studies. Choosing an appropriate sample size is a crucial step in a successful study. A st...


Bayesian shrinkage

Penalized regression methods, such as L1 regularization, are routinely used in high-dimensional applications, and there is a rich literature on optimality properties under sparsity assumptions. In the Bayesian paradigm, sparsity is routinely induced through two-component mixture priors having a probability mass at zero, but such priors encounter daunting computational problems in high dimension...
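To make the "two-component mixture priors having a probability mass at zero" concrete, here is a minimal numpy sketch of drawing coefficients from such a spike-and-slab prior; the mixture weight and slab scale are illustrative assumptions.

```python
import numpy as np

def spike_and_slab_samples(n, pi0=0.8, slab_sd=2.0, seed=None):
    """Draw n coefficients from a two-component spike-and-slab prior:
    with probability pi0 the coefficient is exactly 0 (the 'spike'),
    otherwise it comes from a diffuse N(0, slab_sd^2) 'slab'."""
    rng = np.random.default_rng(seed)
    is_spike = rng.random(n) < pi0       # which coefficients are exactly zero
    slab = rng.normal(0.0, slab_sd, n)   # values for the nonzero component
    return np.where(is_spike, 0.0, slab)
```

The exact point mass at zero is what makes posterior computation with these priors combinatorially hard in high dimensions, motivating the continuous shrinkage alternatives discussed in this line of work.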


Bayesian structure learning in graphical models

We consider the problem of estimating a sparse precision matrix of a multivariate Gaussian distribution, including the case where the dimension p exceeds the sample size n. Gaussian graphical models provide an important tool in describing conditional independence through presence or absence of the edges in the underlying graph. A popular non-Bayesian method of estimating a graphical structure i...


Bayesian estimation of a sparse precision matrix

We consider the problem of estimating a sparse precision matrix of a multivariate Gaussian distribution, including the case where the dimension p is large. Gaussian graphical models provide an important tool in describing conditional independence through presence or absence of the edges in the underlying graph. A popular non-Bayesian method of estimating a graphical structure is given by the gr...



Publication date: 2009